
    On Non-Elitist Evolutionary Algorithms Optimizing Fitness Functions with a Plateau

    We consider the expected runtime of non-elitist evolutionary algorithms (EAs) when they are applied to a family of fitness functions with a plateau of second-best fitness in a Hamming ball of radius r around a unique global optimum. On the one hand, using the level-based theorems, we obtain polynomial upper bounds on the expected runtime for some modes of non-elitist EAs based on unbiased mutation, and on bitwise mutation in particular. On the other hand, we show that the EA with fitness-proportionate selection is inefficient if bitwise mutation is used with the standard setting of the mutation probability.
    Comment: 14 pages, accepted for the proceedings of Mathematical Optimization Theory and Operations Research (MOTOR 2020). arXiv admin note: text overlap with arXiv:1908.0868
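    For concreteness, the kind of objective described above can be sketched as follows: a plateau of second-best fitness on the Hamming ball of radius r around a hidden optimum, together with standard bitwise mutation. This is a minimal illustrative sketch; the function name plateau_fitness, the particular fitness values, and the default mutation rate 1/n are assumptions, not the paper's exact definitions.

```python
import random

def plateau_fitness(x, optimum, r):
    """Illustrative plateau function (assumed form, not the paper's exact one):
    highest fitness at the optimum, a constant second-best value on the
    Hamming ball of radius r around it, and a OneMax-like slope elsewhere."""
    n = len(x)
    dist = sum(a != b for a, b in zip(x, optimum))  # Hamming distance to the optimum
    if dist == 0:
        return n + 2        # unique global optimum
    if dist <= r:
        return n + 1        # plateau of second-best fitness
    return n - dist         # slope guiding the search towards the ball

def bitwise_mutation(x, p=None):
    """Standard bitwise mutation: flip each bit independently with probability p (default 1/n)."""
    n = len(x)
    p = 1.0 / n if p is None else p
    return [b ^ (random.random() < p) for b in x]

# Toy usage: one mutation step on a random parent.
n, r = 20, 2
optimum = [1] * n
parent = [random.randint(0, 1) for _ in range(n)]
child = bitwise_mutation(parent)
print(plateau_fitness(parent, optimum, r), plateau_fitness(child, optimum, r))
```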

    Dissipative systems: uncontrollability, observability and RLC realizability

    The theory of dissipativity has been developed primarily for controllable systems/behaviors. For various reasons, in the context of uncontrollable systems/behaviors, a more appropriate definition of dissipativity is in terms of the dissipation inequality, namely the {\em existence} of a storage function. A storage function is a function such that, along every system trajectory, the rate of increase of the storage function is at most the power supplied. While the power supplied is always expressed in terms of only the external variables, whether or not the storage function should be allowed to depend on unobservable/hidden variables has various consequences for the notion of dissipativity: this paper thoroughly investigates the key aspects of both cases and also proposes another intuitive definition of dissipativity. We first assume that the storage function can be expressed in terms of the external variables and their derivatives only, and prove our first main result: assuming the uncontrollable poles are unmixed, i.e. no pair of uncontrollable poles adds to zero, and assuming strictness of dissipativity at the infinity frequency, the dissipativities of a system and its controllable part are equivalent. We also show that the storage function in this case is a static state function. We then investigate the utility of unobservable/hidden variables in the definition of the storage function: we prove that lossless autonomous behaviors require the storage function to be unobservable from the external variables. We next propose another intuitive definition: a behavior is called dissipative if it can be embedded in a controllable dissipative {\em super-behavior}. We show that this definition imposes a constraint on the number of inputs and thus explains unintuitive examples from the literature in the context of lossless/orthogonal behaviors.
    Comment: 26 pages, one figure. Partial results appeared in an IFAC conference (World Congress, Milan, Italy, 2011).
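    As a pointer to the underlying definition, the dissipation inequality referred to above can be written as follows. The notation (external variable $w$, supply rate $Q_\Sigma$, storage function $Q_\Psi$ as quadratic differential forms) follows common behavioral-theory conventions and is an assumption here, not necessarily the paper's exact formulation.

```latex
% Dissipation inequality (common quadratic-differential-form notation; assumed, not verbatim from the paper):
% a behavior is dissipative with respect to the supply rate Q_Sigma if there exists a storage function Q_Psi with
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\, Q_\Psi(w)(t) \;\le\; Q_\Sigma(w)(t)
  \qquad \text{for every trajectory } w \text{ in the behavior and all } t \in \mathbb{R},
\]
% i.e. along every system trajectory the rate of increase of the stored "energy"
% is at most the power supplied.
```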

    Fluctuation scaling in complex systems: Taylor's law and beyond

    Complex systems consist of many interacting elements which participate in some dynamical process. The activity of the various elements is often different, and the fluctuation in the activity of an element grows monotonically with the average activity. This relationship is often of the form $\text{fluctuations} \approx \text{const.} \times \text{average}^{\alpha}$, where the exponent $\alpha$ is predominantly in the range $[1/2, 1]$. This power law has been observed in a very wide range of disciplines, ranging from population dynamics through the Internet to the stock market, and it is often treated under the names \emph{Taylor's law} or \emph{fluctuation scaling}. This review attempts to show how general the above scaling relationship is by surveying the literature, as well as by reporting some new empirical data and model calculations. We also show some basic principles that can underlie the generality of the phenomenon. This is followed by a mean-field framework based on sums of random variables. In this context the emergence of fluctuation scaling is equivalent to some corresponding limit theorems. In certain physical systems fluctuation scaling can be related to finite-size scaling.
    Comment: 33 pages, 20 figures, 2 tables, submitted to Advances in Physics
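    A quick way to see fluctuation scaling in data is to compute each element's mean activity and fluctuation (standard deviation) and regress log-fluctuation on log-mean; the slope of that fit estimates $\alpha$. The sketch below does this on synthetic Poisson-like data, for which $\alpha = 1/2$ in the standard-deviation convention used above; the synthetic model and variable names are illustrative assumptions, not data from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (assumed model, purely illustrative): 200 "elements",
# each observed 500 times; element i has Poisson activity with its own mean.
means_true = rng.uniform(1.0, 100.0, size=200)
activity = rng.poisson(means_true[:, None], size=(200, 500))

# Per-element sample mean and fluctuation (standard deviation).
avg = activity.mean(axis=1)
fluct = activity.std(axis=1, ddof=1)

# Fluctuation scaling: fluctuations ~ const * average**alpha,
# so a straight-line fit in log-log coordinates estimates alpha as the slope.
alpha, log_const = np.polyfit(np.log(avg), np.log(fluct), 1)
print(f"estimated alpha = {alpha:.3f} (Poisson data should give ~0.5)")
```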

    Non-Parametric Approximations for Anisotropy Estimation in Two-dimensional Differentiable Gaussian Random Fields

    Spatially referenced data often have autocovariance functions with elliptical isolevel contours, a property known as geometric anisotropy. The anisotropy parameters include the tilt of the ellipse (orientation angle) with respect to a reference axis and the aspect ratio of the principal correlation lengths. Since these parameters are unknown a priori, sample estimates are needed to define suitable spatial models for the interpolation of incomplete data. The distribution of the anisotropy statistics is determined by a non-Gaussian joint sampling probability density. By means of analytical calculations, we derive an explicit expression for the joint probability density function of the anisotropy statistics for Gaussian, stationary and differentiable random fields. Based on this expression, we obtain an approximate joint density which we use to formulate a statistical test for isotropy. The approximate joint density is independent of the autocovariance function and provides conservative probability and confidence regions for the anisotropy parameters. We validate the theoretical analysis by means of simulations using synthetic data, and we illustrate the detection of anisotropy changes with a case study involving background radiation exposure data. The approximate joint density provides (i) a stand-alone approximate estimate of the distribution of the anisotropy statistics, (ii) informed initial values for maximum likelihood estimation, and (iii) a useful prior for Bayesian anisotropy inference.
    Comment: 39 pages; 8 figures
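    One common route to sample anisotropy statistics for a differentiable random field is to form the covariance matrix of the field's finite-difference gradients and read the orientation angle and aspect ratio off its eigendecomposition. The sketch below follows that route as an illustration only; the function name and the mapping from eigenvalues to correlation lengths are assumptions, not necessarily the estimators analysed in the paper.

```python
import numpy as np

def anisotropy_statistics(field, dx=1.0, dy=1.0):
    """Illustrative gradient-based anisotropy estimate (an assumption here,
    not necessarily the estimator analysed in the paper).

    Returns (theta, R): orientation angle of the long correlation axis in
    radians, and the aspect ratio R >= 1 of the principal correlation lengths.
    """
    gy, gx = np.gradient(field, dy, dx)          # finite-difference gradients
    g = np.stack([gx.ravel(), gy.ravel()])       # 2 x N matrix of gradient samples
    cov = np.cov(g)                              # 2 x 2 gradient covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

    # The long correlation axis has the *smallest* gradient variance, and the
    # correlation-length ratio is the square root of the eigenvalue ratio.
    long_axis = eigvecs[:, 0]
    theta = np.arctan2(long_axis[1], long_axis[0])
    R = np.sqrt(eigvals[1] / eigvals[0])
    return theta, R

# Toy usage on an anisotropic, Gaussian-filtered noise field (illustrative only).
from scipy.ndimage import gaussian_filter
rng = np.random.default_rng(1)
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=(3, 9))  # longer range along x
theta, R = anisotropy_statistics(field)
print(f"orientation = {np.degrees(theta):.1f} deg, aspect ratio = {R:.2f}")
```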

    A.N. Kolmogorov’s defence of Mendelism

    In 1939 N.I. Ermolaeva published the results of an experiment which repeated parts of Mendel’s classical experiments. On the basis of her experiment she concluded that Mendel’s principle, that self-pollination of hybrid plants gives rise to segregation in the proportion 3:1, was false. The great probability theorist A.N. Kolmogorov reviewed Ermolaeva’s data using a test, now referred to as Kolmogorov’s, or the Kolmogorov-Smirnov, test, which he had proposed in 1933. He found, contrary to Ermolaeva, that her results clearly confirmed Mendel’s principle. This paper shows that there were methodological flaws in Kolmogorov’s statistical analysis and presents a substantially adjusted approach which confirms his conclusions. Some historical commentary on the Lysenko-era background is given, to illuminate the relationship of the disciplines of genetics and statistics in the struggle against the prevailing politically correct pseudoscience in the Soviet Union. There is a Brazilian connection through the person of Th. Dobzhansky.
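    The kind of check Kolmogorov carried out can be sketched schematically (this is not the paper's adjusted procedure, and the simulated data below are purely illustrative): under Mendel's 3:1 hypothesis each family's count of dominant-phenotype plants is Binomial(n, 3/4), so standardizing those counts and applying a Kolmogorov-Smirnov test against the standard normal asks whether the family-to-family variation is what the 3:1 law predicts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated stand-in for Ermolaeva-style data (illustrative only): 100 families,
# each with 30-70 plants, dominant phenotypes following Mendel's 3:1 ratio.
family_sizes = rng.integers(30, 70, size=100)
dominant_counts = rng.binomial(family_sizes, 0.75)

# Standardize each family's count under the 3:1 hypothesis ...
p = 0.75
z = (dominant_counts - family_sizes * p) / np.sqrt(family_sizes * p * (1 - p))

# ... and compare the empirical distribution of the z-values with the standard
# normal via the Kolmogorov-Smirnov test (a large p-value means no evidence
# against the 3:1 segregation law).
ks_stat, p_value = stats.kstest(z, "norm")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```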

    Statistical modeling of ground motion relations for seismic hazard analysis

    We introduce a new approach for ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, whereby two alternative GMRs have an equivalent influence on the hazard if they have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Besides this, we discuss important aspects of classical approaches and identify discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We criticize especially the assumption of log-normally distributed residuals of maxima such as the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(epsilon_0) of Joyner and Boore 1993) is the generalized extreme value distribution. We show by numerical investigations that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs in PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimate event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point-source approach. The residual variance of the logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions. etc.
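    The distributional question raised above (generalized extreme value versus the usual normality assumption for logarithmized-PGA residuals) can be illustrated with a short fit-and-compare sketch. The synthetic residuals, the distribution choices, and the log-likelihood comparison are illustrative assumptions only, not the paper's data sets or estimation concept.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic stand-in for logarithmized PGA residuals (illustrative only; not PEER data):
# block maxima drawn from a Gumbel law, i.e. a GEV with zero shape parameter.
samples = rng.gumbel(loc=0.0, scale=0.3, size=500)

# Fit both candidate models for the individual random component.
gev_params = stats.genextreme.fit(samples)   # shape, loc, scale
norm_params = stats.norm.fit(samples)        # the competing normality assumption for log-PGA

# Compare the fits, e.g. by log-likelihood (higher is better).
ll_gev = np.sum(stats.genextreme.logpdf(samples, *gev_params))
ll_norm = np.sum(stats.norm.logpdf(samples, *norm_params))
print(f"log-likelihood: GEV = {ll_gev:.1f}, normal = {ll_norm:.1f}")
```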

    Unsicherheit und Wahrscheinlichkeit (Uncertainty and Probability)

